Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization

Neural Information Processing Systems

However, single-step adversarial training (SSAT) suffers from catastrophic overfitting (CO), a phenomenon that leads to a severely distorted classifier, making it vulnerable to multi-step adversarial attacks. In this work, we observe that some adversarial examples generated on the SSAT-trained network exhibit anomalous behaviour: although these training samples are generated by the inner maximization process, their associated loss decreases instead. We name these abnormal adversarial examples (AAEs).
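The defining property of an AAE is directly checkable during training: the inner maximization is supposed to increase the loss, so any generated example whose loss falls below that of its clean counterpart is abnormal. Below is a minimal PyTorch sketch of this check, assuming an FGSM-style single-step attack with pixel values in [0, 1]; the function name and epsilon are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def find_abnormal_adversarial_examples(model, x, y, epsilon=8 / 255):
    """Flag abnormal adversarial examples (AAEs): single-step (FGSM)
    adversarial examples whose loss is *lower* than the clean loss,
    contradicting the goal of the inner maximization."""
    model.eval()

    # Per-example loss on the clean inputs
    with torch.no_grad():
        clean_loss = F.cross_entropy(model(x), y, reduction="none")

    # Single-step (FGSM) inner maximization
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    # Per-example loss on the adversarial inputs
    with torch.no_grad():
        adv_loss = F.cross_entropy(model(x_adv), y, reduction="none")

    # AAE: the "maximizing" perturbation actually decreased the loss
    is_abnormal = adv_loss < clean_loss
    return x_adv, is_abnormal
```

The boolean mask returned here could then drive a regularizer that treats flagged examples differently from normal ones, which is the general direction the title suggests.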





Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization

Neural Information Processing Systems

To improve the robustness of deep classifiers against adversarial perturbations, many approaches have been proposed, such as designing new architectures with better robustness properties (e.g., Lipschitz-capped networks), or modifying the …
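One of the architectural approaches the snippet names, Lipschitz-capped networks, bounds each layer's Lipschitz constant so the end-to-end network has certifiable sensitivity to input perturbations. Below is a minimal PyTorch sketch of one common way to impose such a cap, spectral normalization on linear layers; this is a generic illustration of the idea, not the construction used in the cited paper.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

def lipschitz_capped_mlp(in_dim, hidden_dim, out_dim):
    """A small MLP whose per-layer Lipschitz constant is capped at 1
    via spectral normalization (the largest singular value of each
    weight matrix is driven to 1). Since ReLU is 1-Lipschitz, the
    composed network satisfies ||f(x) - f(x')|| <= ||x - x'||."""
    return nn.Sequential(
        spectral_norm(nn.Linear(in_dim, hidden_dim)),
        nn.ReLU(),
        spectral_norm(nn.Linear(hidden_dim, hidden_dim)),
        nn.ReLU(),
        spectral_norm(nn.Linear(hidden_dim, out_dim)),
    )

# A 1-Lipschitz network yields a certified radius directly from the
# output margin: if the gap between the top two logits is m, no l2
# perturbation smaller than m / sqrt(2) can flip the prediction.
model = lipschitz_capped_mlp(784, 256, 10)
x = torch.randn(1, 784)
logits = model(x)
top2 = logits.topk(2).values
certified_radius = (top2[0, 0] - top2[0, 1]) / (2 ** 0.5)
```

The certified radius follows because the difference between any two logits of a 1-Lipschitz network is itself sqrt(2)-Lipschitz, which is what makes the margin a certificate rather than a heuristic.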